To ensure proper knowledge representation of the kitchen environment, it is vital for kitchen robots to recognize the states of the food items being cooked. Although object detection and recognition have been studied extensively, object state classification remains relatively unexplored. The high intra-class similarity of ingredients across different cooking states makes the task even more challenging. In recent times, researchers have proposed Deep Learning-based strategies; however, these have yet to achieve high performance. In this study, we utilize the self-attention mechanism of the Vision Transformer (ViT) architecture for the cooking state recognition task. The proposed approach encapsulates the globally salient features of images while also exploiting weights learned on a larger dataset. This global attention allows the model to withstand the similarities between samples of different cooking objects, while the use of transfer learning helps overcome the lack of inductive bias by utilizing pretrained weights. Several augmentation techniques are also employed to improve recognition accuracy. Evaluation of our proposed framework on the 'Cooking State Recognition Challenge Dataset' achieved an accuracy of 94.3%, which significantly outperforms the state of the art.
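A minimal sketch of the transfer-learning setup described above: fine-tuning an ImageNet-pretrained ViT for cooking-state classification with standard augmentations. The number of state classes, the augmentation choices, and all hyperparameters are assumptions for illustration, not values taken from the paper.

```python
import torch
import timm
from torch import nn
from torchvision import transforms

NUM_STATES = 11  # assumed number of cooking states

# Assumed augmentation set, roughly in the spirit of those mentioned in the abstract
train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.2, 0.2, 0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])

# Transfer learning: start from pretrained ViT weights, replace the classifier head
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=NUM_STATES)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch of augmented images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```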
Automatic license plate recognition systems aim to provide a solution for detecting, localizing, and recognizing the license plate characters of vehicles appearing in video frames. However, deploying such systems in the real world requires real-time performance in low-resource environments. In our paper, we propose a two-stage detection pipeline paired with a Vision API that offers real-time inference speed along with consistently accurate detection and recognition performance. We use a Haar-Cascade classifier as a filter on top of our backbone MobileNet SSDv2 detection model. This reduces inference time by focusing only on high-confidence detections and using them for recognition. We also impose a temporal frame separation strategy to distinguish between multiple vehicle license plates in the same clip. Furthermore, as there is no publicly available Bangla license plate dataset, we created an image dataset and a video dataset containing license plates in the wild. We trained our model on the image dataset, achieving an AP(0.5) score of 86%, and tested our pipeline on the video dataset, observing reasonable detection and recognition performance (82.7% detection rate, 60.8% OCR F1 score) with real-time processing speed (27.2 frames per second).
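A hypothetical sketch of the filtering stage: confirm plate-like regions with a Haar cascade before passing crops to OCR. The cascade model file, the confidence threshold, and the exact ordering of stages are assumptions, not the paper's implementation.

```python
import cv2

CONF_THRESH = 0.6  # assumed confidence cutoff for SSD detections
plate_cascade = cv2.CascadeClassifier("haarcascade_plate.xml")  # placeholder cascade file

def filter_detections(frame, detections):
    """Keep only high-confidence SSD boxes that also contain a cascade hit."""
    candidates = []
    for (x1, y1, x2, y2, score) in detections:  # boxes from MobileNet SSDv2
        if score < CONF_THRESH:
            continue
        crop = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
        hits = plate_cascade.detectMultiScale(crop, scaleFactor=1.1, minNeighbors=4)
        if len(hits) > 0:
            candidates.append((x1, y1, x2, y2))  # forward this crop to OCR
    return candidates
```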
Automatic medical image classification is a very important field where the use of AI has the potential to have a real social impact. However, many challenges still stand in the way of practically effective solutions. One of these is the fact that most medical imaging datasets suffer from class imbalance, which causes existing AI techniques, particularly neural network-based deep learning methods, to perform poorly in such scenarios. This makes the area an interesting and active research focus. In this study, we propose a novel loss function for training neural network models that mitigates this critical issue in this important field. Through rigorous experiments on three independently collected datasets from three different medical imaging domains, we empirically show that our proposed loss function consistently performs well, with an improvement of 2%-10% in macro F1 compared to the baseline models. We hope that our work will precipitate new research toward a more generalized approach to medical image classification.
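The abstract does not specify the proposed loss, so the sketch below is illustrative only: a standard class-weighted focal loss, a common imbalance-aware baseline, to make the problem setting concrete. It is not the paper's loss function, and the weights and gamma are assumptions.

```python
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits, targets, class_weights, gamma=2.0):
    """Down-weights easy examples and re-weights rare classes."""
    log_probs = F.log_softmax(logits, dim=1)
    probs = log_probs.exp()
    # Per-sample cross-entropy with per-class weights
    ce = F.nll_loss(log_probs, targets, weight=class_weights, reduction="none")
    pt = probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # probability of the true class
    return ((1.0 - pt) ** gamma * ce).mean()
```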
A big convergence of language, vision, and multimodal pretraining is emerging. In this work, we introduce a general-purpose multimodal foundation model, BEiT-3, which achieves state-of-the-art transfer performance on both vision and vision-language tasks. Specifically, we advance the big convergence from three aspects: backbone architecture, pretraining task, and model scaling up. We introduce Multiway Transformers for general-purpose modeling, where the modular architecture enables both deep fusion and modality-specific encoding. Based on the shared backbone, we perform masked "language" modeling on images (Imglish), texts (English), and image-text pairs ("parallel sentences") in a unified manner. Experimental results show that BEiT-3 obtains state-of-the-art performance on object detection (COCO), semantic segmentation (ADE20K), image classification (ImageNet), visual reasoning (NLVR2), visual question answering (VQAv2), image captioning (COCO), and cross-modal retrieval (Flickr30K, COCO).
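A minimal sketch of the Multiway Transformer idea described above: a self-attention layer shared across modalities, followed by modality-specific feed-forward "experts". The dimensions and the routing rule are assumptions, not BEiT-3's actual configuration.

```python
import torch
from torch import nn

class MultiwayBlock(nn.Module):
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # shared across modalities
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        # One FFN expert per modality: vision, language, vision-language
        self.experts = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for name in ("vision", "language", "vl")
        })

    def forward(self, x, modality="vision"):
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]                   # deep fusion via shared attention
        x = x + self.experts[modality](self.norm2(x))   # modality-specific encoding
        return x
```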
Screening cluttered and occluded contraband items in baggage X-ray scans is a cumbersome task even for expert security personnel. This paper presents a novel strategy that extends a conventional encoder-decoder architecture to perform instance-aware segmentation and extract merged instances of contraband items without using any additional sub-network or object detector. The encoder-decoder network first performs conventional semantic segmentation and retrieves cluttered baggage items. The model then progressively evolves during training to recognize individual instances using significantly reduced training batches. To avoid catastrophic forgetting, a novel objective function minimizes the network loss in each iteration by retaining previously acquired knowledge while addressing its complex structural dependencies through Bayesian inference. A thorough evaluation of our framework on two publicly available X-ray datasets shows that it outperforms state-of-the-art methods, especially in challenging cluttered scenarios, while achieving an optimal trade-off between detection accuracy and efficiency.
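The abstract's Bayesian objective is not detailed, so the sketch below is an illustrative stand-in: an EWC-style quadratic penalty, a common way to retain previously acquired knowledge while the network evolves toward new instance-level targets. The importance weights and the lambda coefficient are assumptions.

```python
import torch

def retention_loss(model, old_params, importance, task_loss, lam=100.0):
    """Current task loss plus a quadratic penalty anchoring previously important weights."""
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in old_params:
            penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return task_loss + lam * penalty
```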
A total of 605 eligible respondents took part in this survey (population size 1630046161 and required sample size 591), with an age range of 18 to 100. A large proportion of the respondents are aged under 50 (82%) and male (62.15%). The majority of the respondents live in urban areas (60.83%). A total of 61.16% (370/605) of the respondents were willing to accept/take the COVID-19 vaccine. Among the accepting group, only 35.14% showed willingness to take the COVID-19 vaccine immediately, while 64.86% would delay vaccination until they were assured of the vaccine's efficacy and safety or COVID-19 became deadlier in Bangladesh. The regression results showed that age, gender, location (urban/rural), level of education, income, perceived risk of being infected with COVID-19 in the future, perceived severity of infection, previous vaccination experience after age 18, and greater knowledge about COVID-19 and vaccination were significantly associated with acceptance of COVID-19 vaccines. The research reported a high prevalence of COVID-19 vaccine refusal and hesitancy in Bangladesh.
This paper presents our solutions for the MediaEval 2022 task on DisasterMM. The task is composed of two subtasks, namely (i) Relevance Classification of Twitter Posts (RCTP) and (ii) Location Extraction from Twitter Texts (LETT). The RCTP subtask aims at differentiating flood-related from non-relevant social posts, while LETT is a Named Entity Recognition (NER) task aimed at extracting location information from the text. For RCTP, we proposed four different solutions based on BERT, RoBERTa, DistilBERT, and ALBERT, obtaining F1-scores of 0.7934, 0.7970, 0.7613, and 0.7924, respectively. For LETT, we used three models, namely BERT, RoBERTa, and DistilBERT, obtaining F1-scores of 0.6256, 0.6744, and 0.6723, respectively.
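A minimal sketch of the RCTP setup: fine-tuning BERT as a binary relevance classifier for flood-related tweets with the Hugging Face Trainer. The hyperparameters and dataset handling are assumptions, not the exact competition configuration.

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def encode(batch):
    """Tokenize raw tweet text; apply with dataset.map(encode, batched=True)."""
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

args = TrainingArguments(output_dir="rctp-bert", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)

# train_ds / eval_ds are assumed to be tokenized datasets of tweets with 0/1 relevance labels
# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```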
The existing methods for video anomaly detection mostly utilize videos containing identifiable facial and appearance-based features. The use of videos with identifiable faces raises privacy concerns, especially when used in a hospital or community-based setting. Appearance-based features can also be sensitive to pixel-based noise, straining the anomaly detection methods to model the changes in the background and making it difficult to focus on the actions of humans in the foreground. Structural information in the form of skeletons describing the human motion in the videos is privacy-protecting and can overcome some of the problems posed by appearance-based features. In this paper, we present a survey of privacy-protecting deep learning anomaly detection methods using skeletons extracted from videos. We present a novel taxonomy of algorithms based on the various learning approaches. We conclude that skeleton-based approaches for anomaly detection can be a plausible privacy-protecting alternative for video anomaly detection. Lastly, we identify major open research questions and provide guidelines to address them.
Adversarial training is an effective approach to make deep neural networks robust against adversarial attacks. Recently, different adversarial training defenses have been proposed that not only maintain high clean accuracy but also show significant robustness against popular and well-studied adversarial attacks such as PGD. High adversarial robustness can also arise if an attack fails to find adversarial gradient directions, a phenomenon known as 'gradient masking'. In this work, we analyse the effect of label smoothing on adversarial training as one of the potential causes of gradient masking. We then develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA). Our attack approach is based on a 'match and deceive' loss that finds optimal adversarial directions through guidance from a surrogate model. Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size. Furthermore, our proposed G-PGA is generic and can thus be combined with an ensemble attack strategy, as we demonstrate for the case of Auto-Attack, leading to efficiency and convergence speed improvements. More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
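For background, the sketch below shows a standard PGD attack loop, which G-PGA builds on by replacing the plain cross-entropy objective with the 'match and deceive' loss guided by a surrogate model (not shown here). The step size, epsilon, and iteration count are assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent within an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()           # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                          # keep a valid image
    return x_adv
```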
Objective: Despite the numerous studies proposed for audio restoration in the literature, most of them focus on an isolated restoration problem such as denoising or dereverberation, ignoring other artifacts. Moreover, assuming a noisy or reverberant environment with a limited number of fixed signal-to-distortion ratio (SDR) levels is a common practice. However, real-world audio is often corrupted by a blend of artifacts such as reverberation, sensor noise, and background audio mixtures with varying types, severities, and durations. In this study, we propose a novel approach for blind restoration of real-world audio signals by Operational Generative Adversarial Networks (Op-GANs) with temporal and spectral objective metrics to enhance the quality of the restored audio signal regardless of the type and severity of each artifact corrupting it. Methods: 1D Operational GANs are used with a generative neuron model optimized for blind restoration of any corrupted audio signal. Results: The proposed approach has been evaluated extensively over the benchmark TIMIT-RAR (speech) and GTZAN-RAR (non-speech) datasets, corrupted with a random blend of artifacts, each with a random severity, to mimic real-world audio signals. Average SDR improvements of over 7.2 dB and 4.9 dB are achieved, respectively, which are substantial when compared with the baseline methods. Significance: This is a pioneer study in blind audio restoration with the unique capability of direct (time-domain) restoration of real-world audio whilst achieving an unprecedented level of performance for a wide SDR range and artifact types. Conclusion: 1D Op-GANs can achieve robust and computationally effective real-world audio restoration with significantly improved performance. The source codes and the generated real-world audio datasets are shared publicly with the research community in a dedicated GitHub repository.
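A small sketch of the evaluation metric referenced above: the signal-to-distortion ratio (SDR) between a clean reference and a restored signal, computed here in its basic form (assumed; the benchmark may use a more elaborate variant).

```python
import numpy as np

def sdr(reference, estimate, eps=1e-8):
    """SDR in dB; higher means the restored signal is closer to the reference."""
    reference = np.asarray(reference, dtype=np.float64)
    estimate = np.asarray(estimate, dtype=np.float64)
    distortion = reference - estimate
    return 10.0 * np.log10((np.sum(reference ** 2) + eps) / (np.sum(distortion ** 2) + eps))
```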